Constrained Formulations for Neural Network Training and Their Applications to Solve the Two-spiral Problem
Authors
Abstract
In this paper, we formulate neural-network training as a constrained optimization problem instead of the traditional formulation based on unconstrained optimization. We show that constraints violated during a search provide an additional force to help escape from local minima using our newly developed constrained simulated annealing (CSA) algorithm. We demonstrate the merits of our approach by training neural networks to solve the two-spiral problem. To enhance the search, we have developed a strategy to adjust the gain factor of the activation function. We show converged training results for networks with 4, 5, and 6 hidden units, respectively. Our work is the first successful attempt to solve the two-spiral problem with 19 weights.

Traditional supervised neural-network training is formulated as an unconstrained optimization problem of minimizing the sum of squared errors of the output over all training patterns:

    min_w E(w) = sum_{p=1}^{n} e_p(w),    (1)

where e_p(w) is the squared output error on training pattern p, n is the number of training patterns, and w is the vector of weights of the neural network being trained. In order for a neural network to generalize well to unseen patterns, we would like it to have a small number of weights. Training is difficult in this case because the terrain modeled by (1) is often very rugged, and existing local-search algorithms may easily get stuck in deep local minima. Although global search can help escape from local minima, it has similar difficulties when the terrain is rugged.

Instead of using the unconstrained formulation (1), we propose to formulate neural-network training as a constrained optimization problem that includes a constraint on each training pattern. An unsatisfied training pattern at a local minimum of the weight space may provide an additional force to guide a search out of the local minimum. The constrained formulation considered in this paper is:

    min_w E(w)  subject to  e_p(w) = 0  for p = 1, ..., n.
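The unconstrained objective (1) and its per-pattern constrained counterpart can be sketched in a few lines of Python. This is a minimal illustration, not the paper's CSA implementation: a hypothetical 2-2-1 network on XOR data stands in for the two-spiral set, and the weight layout, tolerance `tau`, and function names are assumptions for the sketch.

```python
import numpy as np

def forward(w, x, gain=1.0):
    """Tiny 2-2-1 feedforward net; w packs all 9 weights (layout is
    hypothetical). 'gain' scales the sigmoid, echoing the paper's
    gain-factor adjustment strategy."""
    W1 = w[:4].reshape(2, 2)   # input-to-hidden weights
    b1 = w[4:6]                # hidden biases
    W2 = w[6:8]                # hidden-to-output weights
    b2 = w[8]                  # output bias
    h = 1.0 / (1.0 + np.exp(-gain * (x @ W1 + b1)))
    return 1.0 / (1.0 + np.exp(-gain * (h @ W2 + b2)))

def pattern_errors(w, X, T, gain=1.0):
    """Per-pattern squared errors e_p(w) = (o(w; x_p) - t_p)^2."""
    return np.array([(forward(w, x, gain) - t) ** 2 for x, t in zip(X, T)])

# XOR as a stand-in training set (the two-spiral data is not reproduced here).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0, 1, 1, 0], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=9)

e = pattern_errors(w, X, T)
E = e.sum()          # unconstrained objective (1): sum of per-pattern errors
tau = 0.05           # small tolerance for treating a constraint as satisfied
violated = e > tau   # per-pattern constraints e_p(w) <= tau that are violated
```

Patterns whose error exceeds the tolerance are the violated constraints that, in the constrained formulation, supply the extra force pulling a search out of a local minimum.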
Similar references
An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems
Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...
Static Security Constrained Generation Scheduling Using Sensitivity Characteristics of Neural Network
This paper proposes a novel approach for generation scheduling using the sensitivity characteristic of a Security Analyzer Neural Network (SANN) for improving the static security of a power system. In this paper, the potential overloading at the post-contingency steady state associated with each line outage is proposed as a security index which is used for evaluation and enhancement of system static security....
Multi-objective optimization of geometrical parameters for constrained groove pressing of aluminium sheet using a neural network and the genetic algorithm
One of the sheet severe plastic deformation (SPD) operations, namely constrained groove pressing (CGP), is investigated here in order to specify the optimum values of the geometrical variables of this process on pure aluminium sheets. In this regard, two different objective functions, i.e. the uniformity of the effective strain distribution and the necessary force per unit weight of the specimen, are...
PROJECTED DYNAMICAL SYSTEMS AND OPTIMIZATION PROBLEMS
We establish a relationship between general constrained pseudoconvex optimization problems and globally projected dynamical systems. A corresponding novel neural network model, which is globally convergent and stable in the sense of Lyapunov, is proposed. Both theoretical and numerical approaches are considered. Numerical simulations for three constrained nonlinear optimization problems a...
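The relationship sketched in this abstract can be illustrated by discretizing a projected dynamical system with a forward-Euler step. This is a generic sketch under assumed choices (box constraints, a quadratic objective, illustrative names), not the network model proposed in that paper.

```python
import numpy as np

def project(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def projected_flow(grad, x0, lo, hi, step=0.1, iters=500):
    """Forward-Euler discretization of the projected dynamical system
    x' = P_Omega(x - grad f(x)) - x, whose equilibria are the
    stationary points of min f(x) over the feasible set Omega."""
    x = x0.copy()
    for _ in range(iters):
        x = x + step * (project(x - grad(x), lo, hi) - x)
    return x

# Minimize f(x) = ||x - c||^2 over the box [0, 1]^2; c lies outside the
# box, so the constrained minimizer is the projection of c onto the box.
c = np.array([1.5, -0.5])
grad = lambda x: 2.0 * (x - c)
x_star = projected_flow(grad, np.zeros(2), 0.0, 1.0)
```

Because the flow's equilibria coincide with the constrained stationary points, integrating it to steady state solves the optimization problem; here the trajectory settles at the corner (1, 0) of the box.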
Constrained Formulations for Neural Network Training and Their Applications to Solve the Two-spiral Problem 1 Formulation of Supervised Neural-Network Training
In this paper, we formulate neural-network training as a constrained optimization problem instead of the traditional formulation based on unconstrained optimization. We show that constraints violated during a search provide additional force to help escape from local minima using our newly developed constrained simulated annealing (CSA) algorithm. We demonstrate the merits of our approach by tra...
Publication date: 2000